Adversarial training has been demonstrated to be one of the most effective remedies for defending against adversarial examples, yet it often suffers from a huge robustness generalization gap on unseen testing adversaries, deemed the \emph{adversarially robust generalization problem}. Despite preliminary understandings devoted to adversarially robust generalization, little is known from the architectural perspective. This paper tries to bridge that gap by systematically examining the most representative architectures (e.g., Vision Transformers and CNNs). In particular, we first comprehensively evaluate \emph{20} adversarially trained architectures on the ImageNette and CIFAR-10 datasets against several adversaries (multiple $\ell_p$-norm adversarial attacks), and find that Vision Transformers (e.g., PVT, CoAtNet) often yield better adversarially robust generalization. To further understand which architectural ingredients favor adversarially robust generalization, we delve into several key building blocks and reveal, through the lens of Rademacher complexity, that higher weight sparsity contributes significantly to the better adversarially robust generalization of Vision Transformers, and that it can often be achieved by attention layers. Our extensive study uncovers the close relationship between architectural design and adversarially robust generalization, and instantiates several important insights. We hope our findings can help to better understand the mechanisms for designing robust deep learning architectures.
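As a rough illustration of the weight-sparsity quantity the abstract connects to robust generalization, the sketch below measures the fraction of near-zero entries in a layer's weight matrix. The function name, tolerance, and toy matrices are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def weight_sparsity(w: np.ndarray, tol: float = 1e-3) -> float:
    """Fraction of weights whose magnitude falls below `tol`.

    A higher value means a sparser layer; the abstract links higher
    weight sparsity (often induced by attention layers) to a smaller
    robust generalization gap via Rademacher complexity.
    """
    return float(np.mean(np.abs(w) < tol))

# Hypothetical layer weights: a dense conv-like matrix vs. a matrix
# with ~80% of its entries zeroed out, mimicking a sparse layer.
rng = np.random.default_rng(0)
dense = rng.normal(0.0, 1.0, size=(64, 64))
sparse = dense * (rng.random((64, 64)) < 0.2)

print(weight_sparsity(dense), weight_sparsity(sparse))
```

Comparing this statistic across attention and convolutional blocks of adversarially trained models would be one simple way to probe the paper's observation.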
Knowledge distillation (KD) has shown its effectiveness for object detection, where it trains a compact object detector under the supervision of both AI knowledge (a teacher detector) and human knowledge (human expert annotations). However, existing studies treat AI knowledge and human knowledge uniformly and adopt a single data augmentation strategy during learning, which leads to biased learning of multi-scale objects and insufficient learning from the teacher detector, resulting in unsatisfactory distillation performance. To address these issues, we propose sample-specific data augmentation and adversarial feature augmentation. First, to mitigate the impact arising from multi-scale objects, we propose an adaptive data augmentation based on observations from the Fourier perspective. Second, we propose a feature augmentation method based on adversarial examples to better imitate AI knowledge and compensate for the information insufficiency of the teacher detector. Moreover, our proposed method is unified and easily extended to other KD methods. Extensive experiments demonstrate the effectiveness of our framework, improving the performance of state-of-the-art methods on both one-stage and two-stage detectors, bringing at most a 0.5 mAP gain.
Adversarial training (AT) methods are effective against adversarial attacks, but they introduce a severe disparity in accuracy and robustness between different classes, known as the robust fairness problem. Previously proposed Fair Robust Learning (FRL) adaptively reweights different classes to improve fairness. However, the performance of the well-performing classes decreases, leading to a strong overall performance drop. In this paper, we observe two kinds of unfairness during adversarial training: the different difficulty of generating adversarial examples from each class (source-class fairness) and the disparate target-class tendencies when generating adversarial examples (target-class fairness). From these observations, we propose Balanced Adversarial Training (BAT) to address the robust fairness problem. Regarding source-class fairness, we adjust the attack strength and difficulty of each class to generate samples near the decision boundary, enabling easier and fairer model learning; considering target-class fairness, by introducing a uniform distribution constraint, we encourage the adversarial example generation process of each class to have a fair tendency. Extensive experiments on multiple datasets (CIFAR-10, CIFAR-100, and ImageNette) show that our method can significantly outperform other baselines in mitigating the robust fairness problem (+5-10\% worst-class accuracy).
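A minimal sketch of the source-class idea: scale each class's attack budget by how easy that class currently is, so adversarial examples land comparably close to the decision boundary for every class. The difficulty signal (per-class robust accuracy) and the linear scaling rule are assumptions for illustration; the paper's exact adjustment may differ.

```python
import numpy as np

def classwise_budgets(robust_acc: np.ndarray, base_eps: float = 8 / 255) -> np.ndarray:
    """Per-class attack budgets for a BAT-style scheme.

    Easy (high-robust-accuracy) classes receive stronger attacks and
    hard classes weaker ones, while the mean budget stays at base_eps.
    """
    scale = robust_acc / robust_acc.mean()  # normalize difficulty signal
    return base_eps * scale

# Hypothetical per-class robust accuracies for a 3-class problem.
acc = np.array([0.9, 0.5, 0.7])
eps = classwise_budgets(acc)
```

In a training loop, `eps[c]` would replace the single global $\epsilon$ when crafting adversarial examples for samples of class `c`.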
Extensive evidence suggests that deep neural networks (DNNs) are vulnerable to backdoor attacks, which motivates the development of backdoor detection methods. Existing backdoor detection methods are usually tailored to backdoor attacks of a single specific type (e.g., patch-based or perturbation-based). In practice, however, adversaries may craft multiple types of backdoor attacks, which challenges current detection strategies. Based on the fact that adversarial perturbations are highly correlated with trigger patterns, this paper proposes the Adaptive Perturbation Generation (APG) framework to detect multiple types of backdoor attacks by adaptively injecting adversarial perturbations. Since different trigger patterns exhibit highly diverse behaviors under the same adversarial perturbation, we first design a global-to-local strategy to fit multiple types of backdoor triggers by adjusting the region and budget of the attacks. To further improve the efficiency of perturbation injection, we introduce a gradient-guided mask generation strategy to search for the optimal regions for adversarial attacks. Extensive experiments on multiple datasets (CIFAR-10, GTSRB, Tiny-ImageNet) demonstrate that our method outperforms state-of-the-art baselines by a large margin (+12%).
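To make the gradient-guided mask idea concrete, the sketch below keeps only the fraction of pixels with the largest gradient magnitude as the region for perturbation injection. This is a hypothetical reconstruction from the abstract alone: the function name, the top-k selection rule, and the toy gradient are assumptions, not APG's actual implementation.

```python
import numpy as np

def gradient_guided_mask(grad: np.ndarray, budget: float) -> np.ndarray:
    """Binary mask over the `budget` fraction of pixels with the largest
    gradient magnitude -- candidate regions where injecting an
    adversarial perturbation is most informative."""
    k = max(1, int(budget * grad.size))
    # k-th largest |gradient| value serves as the inclusion threshold.
    thresh = np.partition(np.abs(grad).ravel(), -k)[-k]
    return (np.abs(grad) >= thresh).astype(np.float32)

# Toy input gradient for an 8x8 image.
rng = np.random.default_rng(0)
grad = rng.normal(size=(8, 8))
mask = gradient_guided_mask(grad, budget=0.25)
```

The `budget` knob plays the role of the attack-region size that the global-to-local strategy adjusts per trigger type.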
Billions of people share images of their daily lives on social media every day. However, their biometric information (e.g., fingerprints) can easily be stolen from these images. The threat of fingerprint leakage from social media has created a strong desire to anonymize shared images while maintaining image quality, since a fingerprint acts as a lifelong individual biometric password. To guard against fingerprint leakage, adding imperceptible perturbations to images has emerged as a solution. However, existing works are either weak in black-box transferability or appear unnatural. Motivated by the visual perception hierarchy (i.e., high-level perception exploits model-shared semantics that transfer well across models, while low-level perception extracts primitive stimuli that induce high visual sensitivity), we propose a hierarchical perceptual noise injection framework to address these problems. For black-box transferability, we inject protective noise into the fingerprint orientation field to perturb the model-shared high-level semantics (i.e., fingerprint ridges). Considering visual naturalness, we suppress the low-level local contrast stimulus by regularizing the response of the lateral geniculate nucleus. Our FingerSafe is the first to provide feasible fingerprint protection in both digital (up to 94.12%) and realistic scenarios (Twitter and Facebook, up to 68.75%). Our code can be found at https://github.com/nlsde-safety-team/fingersafe.
Crowd counting, which has been widely adopted for estimating the number of people in safety-critical scenes, has been shown to be vulnerable to adversarial examples in the physical world (e.g., adversarial patches). Though harmful, adversarial examples are also valuable for evaluating and better understanding model robustness. However, existing adversarial example generation methods for crowd counting lack strong transferability among different black-box models, which limits their practicality against real-world systems. Motivated by the fact that transferability is positively correlated with model-invariant characteristics, this paper proposes the Perceptual Adversarial Patch (PAP) generation framework, which tailors adversarial perturbations using model-shared perceptual features. Specifically, we handcraft an adaptive crowd density weighting approach to capture the invariant scale perception features across various models, and utilize density-guided attention to capture model-shared position perception. Both are demonstrated to improve the transferability of our adversarial patches. Extensive experiments show that our PAP achieves state-of-the-art attacking performance in both the digital and physical world, outperforming previous proposals by large margins (at most +685.7 MAE and +699.5 MSE). Furthermore, we empirically demonstrate that adversarial training with our PAP can benefit the performance of vanilla models in alleviating several practical challenges in crowd counting, including cross-dataset generalization (up to -376.0 MAE and -354.9 MSE) and robustness to complex backgrounds (up to -10.3 MAE and -16.4 MSE).
Diabetic Retinopathy (DR) has become one of the leading causes of vision impairment in working-aged people and is a severe problem worldwide. However, most prior works ignore the ordinal information of the labels. In this project, we propose a novel design, MTCSNN, a Multi-Task Clinical Siamese Neural Network for the diabetic retinopathy severity prediction task. The novelty of this project is to utilize the ordinal information among labels and add a new regression task, which can help the model learn more discriminative feature embeddings for the fine-grained classification task. We perform comprehensive experiments on the RetinaMNIST dataset, comparing MTCSNN with other models such as ResNet-18, 34, and 50. Our results show that MTCSNN achieves superior AUC and accuracy on the test dataset.
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT exhibits strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
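As a toy illustration of the NAIVEATTACK poisoning step described above, the sketch below stamps a fixed trigger patch into the corner of raw images before distillation begins. The function name, trigger shape, and corner placement are illustrative assumptions; DOORPING's iterative trigger updates are not shown.

```python
import numpy as np

def add_trigger(images: np.ndarray, trigger: np.ndarray) -> np.ndarray:
    """Stamp `trigger` into the bottom-right corner of each image,
    mimicking a naive pre-distillation poisoning step.

    images:  (N, H, W) array of grayscale images
    trigger: (th, tw) patch, with th <= H and tw <= W
    """
    poisoned = images.copy()  # leave the clean data untouched
    th, tw = trigger.shape
    poisoned[:, -th:, -tw:] = trigger
    return poisoned

# Toy batch: five blank 32x32 images, 3x3 all-ones trigger.
images = np.zeros((5, 32, 32))
trigger = np.ones((3, 3))
poisoned = add_trigger(images, trigger)
```

Injecting at this stage, rather than during model training, is precisely what distinguishes these attacks from prior backdoor work.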
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes given only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects: feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features, and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
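The mask-based dynamic class center from the first insight can be sketched as masked average pooling: average a support feature map over the object mask to obtain one vector per class, which then re-weights query features. This is a minimal reconstruction under that assumption; RefT's actual module likely adds learned weighting on top.

```python
import numpy as np

def dynamic_class_center(feat: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Masked average pooling of a support feature map.

    feat: (C, H, W) support features
    mask: (H, W) binary support mask for one object
    Returns a (C,) class-center vector.
    """
    w = mask / (mask.sum() + 1e-6)  # normalized spatial weights
    return np.tensordot(feat, w, axes=([1, 2], [0, 1]))

# Toy support: constant 2.0 features, object covering a 2x2 region.
feat = np.full((8, 4, 4), 2.0)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
center = dynamic_class_center(feat, mask)
```

A query feature map could then be modulated channel-wise by `center` (e.g., via element-wise product) before the instance-level cross-attention step.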